28 research outputs found

    Parallel transfer evolution algorithm

    Get PDF
    Parallelization of an evolutionary algorithm takes advantage of modular population division and information exchange among multiple processors. However, existing parallel evolutionary algorithms are rather ad hoc and lack the capability to adapt to diverse problems. To accommodate a wider range of problems and to reduce algorithm design costs, this paper develops a parallel transfer evolution algorithm. It is based on the island model of parallel evolutionary algorithms and, to improve performance, adaptively transfers both the connections and the evolutionary operators from one sub-population pair to another. Requiring no extra upper-level selection strategy, each sub-population autonomously selects evolutionary operators and local search operators as subroutines according to both its own ranking board and that of its connected neighbor. The parallel transfer evolution is tested on two typical combinatorial optimization problems in comparison with six existing ad-hoc evolutionary algorithms, and is also applied to a real-world case study in comparison with five typical parallel evolutionary algorithms. The tests show that the proposed scheme and the resultant parallel evolutionary algorithm offer high flexibility in dealing with a wider range of combinatorial optimization problems without algorithmic modification or redesign. Both the topological transfer and the algorithmic transfer are seen to be applicable not only to combinatorial optimization problems but also to non-permutation complex problems.
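
    The abstract describes the mechanism only at a high level. The toy sketch below illustrates the island-model idea of per-sub-population operator ranking boards plus periodic transfer of operator credit and elite individuals between connected islands; the objective, operators, and credit rule are invented for illustration and are not the paper's implementation.

    # Toy sketch only: objective, operators and credit rule are assumptions, not the paper's code.
    import random

    def fitness(perm):
        # toy objective: number of correctly ordered pairs (maximal for the sorted permutation)
        return sum(perm[i] < perm[j] for i in range(len(perm)) for j in range(i + 1, len(perm)))

    def swap_mutation(perm):
        child = perm[:]
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
        return child

    def insert_mutation(perm):
        child = perm[:]
        i, j = random.sample(range(len(child)), 2)
        child.insert(j, child.pop(i))
        return child

    OPERATORS = {"swap": swap_mutation, "insert": insert_mutation}

    class Island:
        def __init__(self, n=20, size=30):
            self.pop = [random.sample(range(n), n) for _ in range(size)]
            self.board = {name: 1.0 for name in OPERATORS}   # ranking board: credit per operator

        def step(self):
            if random.random() < 0.1:                        # small exploration chance
                name = random.choice(list(OPERATORS))
            else:
                name = max(self.board, key=self.board.get)   # otherwise use the best-ranked operator
            parent = max(random.sample(self.pop, 3), key=fitness)
            child = OPERATORS[name](parent)
            gain = fitness(child) - fitness(parent)
            self.board[name] = 0.9 * self.board[name] + 0.1 * max(gain, 0)
            worst = min(range(len(self.pop)), key=lambda k: fitness(self.pop[k]))
            if fitness(child) > fitness(self.pop[worst]):
                self.pop[worst] = child

        def transfer_from(self, other):
            # adopt the neighbor island's best-ranked operator credit and one elite individual
            best = max(other.board, key=other.board.get)
            self.board[best] = max(self.board[best], other.board[best])
            self.pop[random.randrange(len(self.pop))] = max(other.pop, key=fitness)

    islands = [Island() for _ in range(4)]                   # four islands connected in a ring
    for gen in range(200):
        for island in islands:
            island.step()
        if gen % 25 == 0:                                    # periodic transfer along the ring
            for k, island in enumerate(islands):
                island.transfer_from(islands[(k + 1) % len(islands)])

    print(max(fitness(ind) for island in islands for ind in island.pop))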

    Robotic disassembly sequence planning with backup actions

    Get PDF

    An Investigation of LLMs' Inefficacy in Understanding Converse Relations

    Full text link
    Large Language Models (LLMs) have achieved remarkable success in many formal-language-oriented tasks, such as structured data-to-text and semantic parsing. However, current benchmarks mostly follow the data distribution of the LLMs' pre-training data, so a natural question arises: do LLMs really understand the structured semantics of formal languages? In this paper, we investigate this problem on a special case, converse binary relations. We introduce a new benchmark, ConvRe, focusing on converse relations, which contains 17 relations and 1240 triples extracted from popular knowledge graph completion datasets. ConvRe features two tasks, Re2Text and Text2Re, formulated as multiple-choice question answering to evaluate LLMs' ability to determine the matching between relations and associated text. For the evaluation protocol, apart from different prompting methods, we further introduce variants to the test text and few-shot example text. We conduct experiments on three popular LLM families and observe various scaling trends. The results suggest that LLMs often resort to shortcut learning and still face challenges on our proposed benchmark.
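
    As a rough illustration of the evaluation format, a Re2Text-style question can be framed as a two-option multiple-choice prompt; the example triple and option wording below are invented and are not taken from the ConvRe dataset.

    # Illustrative only: one way a knowledge-graph triple could be rendered as a
    # Re2Text-style multiple-choice item (assumed format, not the benchmark's code).
    def re2text_item(head, relation, tail):
        triple = f"({head}, {relation}, {tail})"
        options = {
            "A": f"{head} {relation.replace('_', ' ')} {tail}.",   # normal reading of the relation
            "B": f"{tail} {relation.replace('_', ' ')} {head}.",   # converse reading (arguments swapped)
        }
        prompt = (f"Which sentence correctly describes the triple {triple}?\n"
                  + "\n".join(f"{key}. {text}" for key, text in options.items())
                  + "\nAnswer:")
        return prompt, "A"                                          # "A" is the gold option here

    prompt, gold = re2text_item("Engine", "has_part", "Piston")
    print(prompt)       # a model's chosen letter would be compared against `gold`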

    Configurable Intelligent Optimization Algorithm

    No full text
    Presenting the concept, design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method for optimizing manufacturing processes. It gives a comprehensive elaboration of basic intelligent optimization algorithms and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. The intelligent optimization algorithm is not just a single algorithm…

    Configurable intelligent optimization algorithm: design and practice in manufacturing

    No full text
    Presenting the concept, design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method for optimizing manufacturing processes. It gives a comprehensive elaboration of basic intelligent optimization algorithms and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. The intelligent optimization algorithm is not just a single algorithm…

    Modelling of robotic disassembly line balancing

    No full text

    Solutions for Mixed-Model Disassembly Line Balancing with Multi-robot Workstations

    No full text
    This chapter discusses serial-paced mixed-model disassembly line balancing with multi-robot workstations (MDLB-MR). The main difference between MDLB-MR and classic simple disassembly lines is the number of robots that can be allocated to each workstation. In MDLB-MR, a set of end-of-life (EOL) products can be disassembled simultaneously, and each product has its own set of precedence relations. The chapter compares the performance of different evolutionary algorithms (EAs) and concludes that the Problem-specific Bi-criterion EA (PBEA) outperforms the other EAs tested. Furthermore, a combination of non-Pareto-based EAs with the Pareto selection criterion can be effective at solving MDLB-MR problems.
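
    For intuition only, a disassembly sequence can be checked against its precedence relations and then packed into multi-robot workstations along the following lines; the product structure, task times and capacity rule are assumed here and are not the chapter's MDLB-MR model.

    # Assumed toy data, not the chapter's model.
    precedence = {"battery": [], "cover": [], "board": ["cover"], "screen": ["cover", "board"]}
    times = {"battery": 4, "cover": 3, "board": 6, "screen": 5}

    def is_feasible(sequence, precedence):
        done = set()
        for task in sequence:
            if any(pred not in done for pred in precedence[task]):
                return False                     # a predecessor has not been disassembled yet
            done.add(task)
        return True

    def assign(sequence, times, cycle_time, robots_per_station):
        # crude packing: a workstation's capacity is taken as cycle_time * robots_per_station
        capacity = cycle_time * robots_per_station
        stations, load = [[]], 0
        for task in sequence:
            if load + times[task] > capacity:    # open a new workstation when capacity is exceeded
                stations.append([])
                load = 0
            stations[-1].append(task)
            load += times[task]
        return stations

    sequence = ["cover", "board", "battery", "screen"]
    assert is_feasible(sequence, precedence)
    print(assign(sequence, times, cycle_time=8, robots_per_station=2))
    # -> [['cover', 'board', 'battery'], ['screen']]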

    Solutions for robotic disassembly line balancing

    No full text